
    DTKI: a new formalized PKI with no trusted parties

    The security of public key validation protocols for web-based applications has recently attracted attention because of weaknesses in the certificate authority model, and consequent attacks. Recent proposals using public logs have succeeded in making certificate management more transparent and verifiable. However, those proposals involve a fixed set of authorities, which creates an oligopoly. Another problem with current log-based systems is their heavy reliance on trusted parties that monitor the logs. We propose a distributed transparent key infrastructure (DTKI), which greatly reduces the oligopoly of service providers and allows verification of the behaviour of trusted parties. In addition, this paper formalises the public log data structure and provides a formal analysis of the security that DTKI guarantees. Comment: 19 pages
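
    To make the log idea concrete, here is a minimal Python sketch of an append-only Merkle-tree log with an inclusion proof, the kind of structure such transparency proposals build on. The function names and entry format are illustrative assumptions, not DTKI's actual data structure.

        # Illustrative sketch (not DTKI's data structure): a binary Merkle tree over
        # log entries, plus an inclusion proof that a record appears in the log.
        import hashlib

        def h(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def leaf_hash(entry: bytes) -> bytes:
            return h(b"\x00" + entry)          # domain-separate leaves from inner nodes

        def node_hash(left: bytes, right: bytes) -> bytes:
            return h(b"\x01" + left + right)

        def merkle_root(entries):
            level = [leaf_hash(e) for e in entries]
            while len(level) > 1:
                if len(level) % 2:             # duplicate the last node on odd levels
                    level.append(level[-1])
                level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
            return level[0]

        def inclusion_proof(entries, index):
            """Collect sibling hashes from the leaf up to the root."""
            level = [leaf_hash(e) for e in entries]
            proof = []
            while len(level) > 1:
                if len(level) % 2:
                    level.append(level[-1])
                sibling = index ^ 1
                proof.append((level[sibling], sibling < index))   # (hash, sibling is on the left)
                level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
                index //= 2
            return proof

        def verify_inclusion(entry, proof, root):
            digest = leaf_hash(entry)
            for sibling, sibling_is_left in proof:
                digest = node_hash(sibling, digest) if sibling_is_left else node_hash(digest, sibling)
            return digest == root

        log = [b"example.org:pubkey1", b"example.com:pubkey2", b"example.net:pubkey3"]
        root = merkle_root(log)
        assert verify_inclusion(log[1], inclusion_proof(log, 1), root)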

    Fault Independence in Blockchain

    Byzantine Fault-Tolerant (BFT) protocols have been proposed to tolerate malicious behaviors in state machine replication. With classic BFT protocols, the total number of replicas is known and fixed a priori. The resilience of a BFT protocol, i.e., the number of tolerated Byzantine replicas (denoted f), is derived from the total number of replicas according to quorum theory. To guarantee that an attacker cannot control more than f replicas, and thus to guarantee safety, it is vital to ensure fault independence among all replicas. In practice this is achieved by enforcing diverse configurations of replicas, i.e., each replica has a unique configuration, so that f faults cannot compromise more than f replicas. While managing replica diversity in BFT protocols has been studied in permissioned environments with a small number of replicas, no prior work has discussed fault independence in a permissionless environment (such as public blockchains) where anyone can join and leave the system at any time. This is particularly challenging for two reasons. First, in a permissionless environment, anyone can join as a replica at any time and no global coordinator can be relied on to manage replica diversity. Second, while great progress has been made in scaling consensus algorithms to thousands of replicas, replica diversity cannot provide fault independence at this scale, limiting practical and meaningful resilience. This paper provides the first discussion of the impact of fault independence on permissionless blockchains, discusses replica configuration diversity, quantifies replica diversity using entropy, and defines optimal fault independence. Comment: Disrupt Track of DSN 202
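
    As a rough illustration of the quantitative ideas above, the following Python sketch computes the classic quorum-based resilience bound and measures configuration diversity with Shannon entropy. The configuration labels and the exact diversity metric are assumptions made for illustration, not the paper's formulation.

        # Toy illustration: resilience f for a classic BFT quorum system of n replicas,
        # and Shannon entropy of the replica-configuration distribution as a diversity
        # measure; higher entropy means one shared vulnerability hits fewer replicas.
        import math
        from collections import Counter

        def bft_resilience(n: int) -> int:
            """Classic quorum bound: tolerate f Byzantine replicas when n >= 3f + 1."""
            return (n - 1) // 3

        def configuration_entropy(configs) -> float:
            """Shannon entropy (bits) of the replica-configuration distribution."""
            counts = Counter(configs)
            total = len(configs)
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        # 7 replicas: at most f = 2 Byzantine replicas can be tolerated.
        replicas = ["linux/openssl", "linux/openssl", "bsd/libressl",
                    "windows/schannel", "linux/boringssl", "bsd/libressl", "sel4/mbedtls"]
        print(bft_resilience(len(replicas)))              # 2
        print(round(configuration_entropy(replicas), 3))  # maximal only if all configs differ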

    ANCHOR: logically-centralized security for Software-Defined Networks

    While the centralization of SDN brought advantages such as a faster pace of innovation, it also disrupted some of the natural defenses of traditional architectures against different threats. The literature on SDN has mostly been concerned with the functional side, despite some specific works concerning non-functional properties like 'security' or 'dependability'. Though addressing the latter in an ad-hoc, piecemeal way may work, it will most likely lead to efficiency and effectiveness problems. We claim that the enforcement of non-functional properties as a pillar of SDN robustness calls for a systemic approach. As a general concept, we propose ANCHOR, a subsystem architecture that promotes the logical centralization of non-functional properties. To show the effectiveness of the concept, we focus on 'security' in this paper: we identify the current security gaps in SDNs and we populate the architecture middleware with the appropriate security mechanisms, in a global and consistent manner. Essential security mechanisms provided by ANCHOR include reliable entropy and resilient pseudo-random generators, and protocols for secure registration and association of SDN devices. We claim and justify in the paper that centralizing such mechanisms is key to their effectiveness, by allowing us to: define and enforce global policies for those properties; reduce the complexity of controllers and forwarding devices; ensure higher levels of robustness for critical services; foster interoperability of the non-functional property enforcement mechanisms; and promote the security and resilience of the architecture itself. We discuss design and implementation aspects, and we prove and evaluate our algorithms and mechanisms, including the formalisation of the main protocols and the verification of their core security properties using the Tamarin prover. Comment: 42 pages, 4 figures, 3 tables, 5 algorithms, 139 references
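
    The following Python sketch illustrates one of the mechanisms named above, reliable entropy, by pooling several independent entropy sources through a hash. It is a minimal sketch with assumed source names, not ANCHOR's implementation.

        # Minimal sketch (not ANCHOR's code): hash several independent entropy sources
        # into one seed, so the pooled output stays unpredictable as long as at least
        # one of the contributing sources is sound.
        import hashlib
        import os
        import time

        def pooled_seed(*sources: bytes) -> bytes:
            """Length-prefixed concatenate-and-hash entropy pooling."""
            pool = hashlib.sha256()
            for s in sources:
                pool.update(len(s).to_bytes(4, "big"))   # unambiguous source boundaries
                pool.update(s)
            return pool.digest()

        seed = pooled_seed(
            os.urandom(32),                              # OS entropy
            time.time_ns().to_bytes(8, "big"),           # low-quality timing source
            b"device-unique-registration-secret",        # hypothetical per-device secret
        )
        print(seed.hex())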

    DECIM: Detecting Endpoint Compromise In Messaging

    We present DECIM, an approach to the challenge of detecting endpoint compromise in messaging. DECIM manages and refreshes encryption/decryption keys in an automatic and transparent way: it requires every use of a key to be inserted in an append-only log, which the device owner can interrogate in order to detect misuse. We propose a multi-device messaging protocol that exploits our concept to allow users to detect unauthorised usage of their device keys. It is co-designed with a formal model, and we verify its core security property using the Tamarin prover. We present a proof-of-concept implementation providing the main features required for deployment. We find that DECIM messaging is efficient even for millions of users. The methods we introduce are not intended to replace existing methods used to keep keys safe (such as hardware devices, careful procedures, or key refreshment techniques); rather, they provide a useful and effective additional layer of security.
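
    The detection idea can be pictured with a short Python sketch: the device compares the append-only log against its own record of key uses and flags anything it did not produce. The entry format and function names are hypothetical, not DECIM's protocol messages.

        # Simplified sketch of the detection idea (names are illustrative): any log
        # entry the legitimate device cannot account for indicates that the key was
        # used elsewhere, i.e. possible endpoint compromise.
        def detect_misuse(log_entries, local_uses):
            """Return log entries that the legitimate device never produced."""
            own = set(local_uses)
            return [entry for entry in log_entries if entry not in own]

        append_only_log = ["msg-001", "msg-002", "msg-003"]   # key uses recorded in the public log
        device_record   = ["msg-001", "msg-002"]              # key uses the device itself made
        suspicious = detect_misuse(append_only_log, device_record)
        print(suspicious)   # ['msg-003'] -> alert the owner, refresh the key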

    General Congestion Attack on HTLC-Based Payment Channel Networks

    Payment Channel Networks (PCNs) are a promising approach to scaling blockchains. However, PCNs have limited liquidity: large-amount or multi-hop payments may fail. The major threat to PCN liquidity is payment griefing, where an adversary acting as the payee withholds the payment, so that the coins involved in the payment cannot be used for routing other payments before the payment expires. Payment griefing gives adversaries a chance to launch a congestion attack, in which the adversary griefs a large number of payments and paralyses the entire PCN. Understanding congestion attacks, including their strategies and impact, is crucial for designing PCNs with better liquidity guarantees. However, existing research has only focused on specific attacking strategies and specific aspects of their impact on PCNs. We fill this gap by studying the general congestion attack. Compared to existing attack strategies, each step in our framework serves an orthogonal purpose and is customisable, allowing the adversary to focus on different aspects of liquidity. To evaluate the attack's impact, we propose a generic method of quantifying PCNs' liquidity and the effectiveness of congestion attacks. We evaluate our general congestion attacks on Bitcoin's Lightning Network, and show that with direct channels to the 1.5% richest nodes and ∼0.0096 BTC of cost, the adversary can launch a congestion attack that locks 47% (∼280 BTC) of the coins in the network; reduces the success rate of payments by 16.0%∼60.0%; increases the fee of payments by 4.5%∼16.0%; increases the average number of payment attempts by 42.0%∼115.3%; and increases the number of bankruptcy nodes (i.e., nodes with insufficient balance for making normal-size payments) by 26.6%∼109.4%, where the amounts of payments range from 0.001 to 0.019 BTC.
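
    As a back-of-the-envelope illustration of how withheld HTLCs translate into locked liquidity, the following Python sketch computes the locked fraction of network capacity from a list of griefed HTLC amounts. The figures are assumptions chosen to echo the reported scale, not the paper's dataset or method.

        # Toy calculation (illustrative numbers): coins committed to withheld HTLCs
        # stay locked on every channel along the route until the timelock expires, so
        # the locked fraction is the total griefed amount over network capacity.
        def locked_fraction(griefed_htlc_amounts, total_network_capacity):
            locked = sum(griefed_htlc_amounts)
            return locked, locked / total_network_capacity

        # e.g. ~280 BTC withheld across many HTLCs in a hypothetical ~600 BTC network
        locked, frac = locked_fraction([0.019] * 14737, 600.0)
        print(f"{locked:.1f} BTC locked, {frac:.0%} of capacity")   # ~280 BTC, ~47%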

    Mitigating private key compromise

    Cryptosystems rely on the assumption that the computer end-points can securely store and use cryptographic keys. Yet, this assumption is rather hard to justify in practice. New software vulnerabilities are discovered every day, and malware is pervasive on mobile devices and desktop PCs. This thesis provides research on how to mitigate private key compromise in three different cases. The first case considers compromised signing keys of certificate authorities in public key infrastructure. To address this problem, we analyse and evaluate existing prominent certificate management systems, and propose a new system called "Distributed and Transparent Key Infrastructure", which is secure even if all service providers collude. The second case considers key compromise in secure communication. We develop a simple approach that either guarantees the confidentiality of messages sent to a device even if the device was previously compromised, or allows the user to detect that confidentiality failed. We propose a multi-device messaging protocol that exploits our concept to allow users to detect unauthorised usage of their device keys. The third case considers key compromise in secret distribution. We develop a self-healing system, which provides a proactive security guarantee: an attacker can learn a secret only if s/he can compromise all servers simultaneously within a short period.
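
    For the third case, the proactive flavour of the guarantee can be sketched with additive secret sharing that is periodically re-randomised. The Python below is a minimal illustration under assumed parameters, not the thesis's actual construction.

        # Minimal sketch (assumptions, not the thesis design): the secret is split
        # additively across n servers and shares are periodically refreshed with a
        # sharing of zero, so an attacker needs all n shares from the SAME epoch.
        import secrets

        P = 2**127 - 1          # a prime modulus for arithmetic in a finite field

        def split(secret: int, n: int):
            shares = [secrets.randbelow(P) for _ in range(n - 1)]
            shares.append((secret - sum(shares)) % P)
            return shares

        def refresh(shares):
            """Add a random sharing of zero; each share changes, their sum does not."""
            zero_shares = split(0, len(shares))
            return [(s + z) % P for s, z in zip(shares, zero_shares)]

        def reconstruct(shares) -> int:
            return sum(shares) % P

        secret = 42
        epoch0 = split(secret, 5)
        epoch1 = refresh(epoch0)
        assert reconstruct(epoch1) == secret
        # Mixing shares from different epochs (what a slow attacker collects) fails:
        assert reconstruct(epoch0[:2] + epoch1[2:]) != secret   # holds with overwhelming probability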

    Automatically Detecting the Misuse of Secrets: Foundations, Design Principles, and Applications

    We develop foundations and several constructions for security protocols that can automatically detect, without false positives, if a secret (such as a key or password) has been misused. Such constructions can be used, e.g., to automatically shut down compromised services, or to automatically revoke misused secrets to minimize the effects of compromise. Our threat model includes malicious agents, (temporarily or permanently) compromised agents, and clones. Previous works have studied domain-specific partial solutions to this problem. For example, Google’s Certificate Transparency aims to provide infrastructure to detect the misuse of a certificate authority’s signing key, logs have been used for detecting endpoint compromise, and protocols have been proposed to detect cloned RFID/smart cards. Contrary to these existing approaches, for which the designs are interwoven with domain-specific considerations and which usually do not enable fully automatic response (i.e., they need human assessment), our approach shows where automatic action is possible. Our results unify, provide design rationales, and suggest improvements for the existing domain-specific solutions. Based on our analysis, we construct several mechanisms for the detection of misuse. Our mechanisms enable automatic response, such as revoking keys or shutting down services, thereby substantially limiting the impact of a compromise. In several case studies, we show how our mechanisms can be used to substantially increase the security guarantees of a wide range of systems, such as web logins, payment systems, or electronic door locks. For example, we propose and formally verify an improved version of Cloudflare’s Keyless SSL protocol that enables key misuse detection.
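
    One way to picture misuse detection with automatic response is a counter-synchronisation check between the legitimate device and the service: a clone using the same secret makes the counters diverge, and the mismatch triggers revocation without human assessment. The Python below is a toy instance of this principle, not one of the paper's concrete constructions.

        # Toy instance of the detection principle: both sides count uses of a secret;
        # a clone's use makes the counts diverge, which triggers automatic revocation.
        class MisuseDetector:
            def __init__(self):
                self.server_count = 0      # uses the service has accepted so far
                self.revoked = False

            def record_use(self, client_count: int) -> bool:
                """The client reports how many uses it believes it has made."""
                self.server_count += 1
                if client_count != self.server_count:   # another holder used the secret
                    self.revoked = True                  # automatic response: revoke
                return not self.revoked

        detector = MisuseDetector()
        assert detector.record_use(1)        # legitimate device, counts agree
        assert detector.record_use(2)
        assert not detector.record_use(2)    # clone with a stale count -> mismatch -> revoked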

    SoK: Diving into DAG-based Blockchain Systems

    Blockchain plays an important role in cryptocurrency markets and technology services. However, the high latency and low scalability of classic designs retard its adoption and applications. Reconstructed blockchain systems have been proposed to avoid the waste of competing transactions imposed by linearly sequenced blocks. These systems instead structure transactions/blocks in the form of a Directed Acyclic Graph (DAG) and consequently re-build upper-layer components including consensus, incentives, etc. The promise of DAG-based blockchain systems is to enable fast confirmation (completing transactions within milliseconds) and high scalability (attaching transactions in parallel) without significantly compromising security. However, this field still lacks systematic work that summarises the DAG technique. To bridge the gap, this Systematization of Knowledge (SoK) provides a comprehensive analysis of DAG-based blockchain systems. By deconstructing open-sourced systems and reviewing academic research, we summarise the main components and characteristic properties of these systems, and describe how a DAG is established. With this in hand, we analyse the security and performance of several leading systems, followed by discussions and comparisons with concurrent (blockchain scaling) techniques. We further identify open challenges to highlight the potential of DAG-based solutions and indicate promising directions for future research. Comment: Full version
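
    The common structural idea, blocks referencing multiple parents and a deterministic order derived from the resulting DAG, can be sketched in a few lines of Python. The block format and the ordering rule (a plain topological sort) are simplifying assumptions rather than any particular system's design.

        # Minimal sketch of the shared structural idea: blocks reference several
        # parents, forming a DAG, and a deterministic topological order is derived.
        from collections import defaultdict, deque

        def topological_order(parents: dict):
            """Kahn's algorithm over a block DAG given {block: [parent blocks]}."""
            children = defaultdict(list)
            indegree = {b: len(ps) for b, ps in parents.items()}
            for block, ps in parents.items():
                for p in ps:
                    children[p].append(block)
            queue = deque(sorted(b for b, d in indegree.items() if d == 0))
            order = []
            while queue:
                block = queue.popleft()
                order.append(block)
                for child in sorted(children[block]):   # break ties deterministically
                    indegree[child] -= 1
                    if indegree[child] == 0:
                        queue.append(child)
            return order

        # Blocks B and C extend the genesis in parallel; D references both parents.
        dag = {"genesis": [], "B": ["genesis"], "C": ["genesis"], "D": ["B", "C"]}
        print(topological_order(dag))   # ['genesis', 'B', 'C', 'D']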

    Rational Ponzi Games in Algorithmic Stablecoin

    Algorithmic stablecoins (AS) are a special type of stablecoin that is not backed by any asset (equivalently, without collateral). They stand to revolutionize the way a sovereign fiat operates. As implemented, these coins are poorly stabilized in most cases, easily deviating from the price target or even falling into a catastrophic collapse (a.k.a. death spiral), and are as a result dismissed as Ponzi schemes. However, is this the whole picture? In this paper, we try to reveal the truth and clarify this deceptive concept. We find that a Ponzi is basically a financial protocol that pays existing investors with funds collected from new ones. Running a Ponzi, however, does not necessarily imply that any participant is in any sense losing out, as long as the game can be perpetually rolled over. Economists call such a realization a rational Ponzi game. We thereby propose a rational model in the context of AS and derive its holding conditions. We apply the model to examine whether or not an algorithmic stablecoin is a rational Ponzi game. Accordingly, we discuss two types of algorithmic stablecoins (Rebase and Seigniorage Shares) and dig into the historical market performance of two impactful projects (Ampleforth and TerraUSD, respectively) to demonstrate the effectiveness of our model. Comment: Accepted by CryptoEx@ICBC 202
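
    The rolling-over intuition behind a rational Ponzi game can be illustrated with a toy Python simulation: existing claims are serviced from new inflows, and the scheme stays solvent as long as inflows keep growing at least as fast as the promised return. The parameters below are assumptions for illustration, not the paper's model or data.

        # Toy simulation of the rolling-over intuition (assumed parameters): existing
        # holders are paid a return r out of fresh inflows growing at rate g. When
        # inflows grow at least as fast as the promised return (and start large
        # enough), old claims can be rolled over; otherwise the shortfall balloons.
        def simulate(liability, r, g, inflow, rounds):
            for t in range(rounds):
                liability = liability * (1 + r) - inflow   # service old claims with new money
                inflow *= 1 + g                            # next round's fresh inflows
                if liability <= 0:
                    return f"fully redeemed at round {t}"
            return f"outstanding liability after {rounds} rounds: {liability:.1f}"

        print(simulate(liability=100.0, r=0.05, g=0.08, inflow=10.0, rounds=60))  # shrinks to zero
        print(simulate(liability=100.0, r=0.05, g=0.00, inflow=4.0,  rounds=60))  # keeps growing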